Globally convergent limited memory bundle method for large-scale nonsmooth optimization
Authors
Abstract
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of thousands of variables. In the paper [Haarala, Miettinen, Mäkelä, Optimization Methods and Software, 19, (2004), pp. 673–692] we have described an efficient method for large-scale nonsmooth optimization. In this paper, we introduce a new variant of this method and prove its global convergence for locally Lipschitz continuous objective functions, which are not necessarily differentiable or convex. In addition, we give some encouraging results from numerical experiments.
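To give a concrete flavor of how a limited memory bundle type method is organized, the following is a heavily simplified toy sketch (in Python, and explicitly not the authors' LMBM algorithm): a search direction is computed by applying a limited memory quasi-Newton approximation to an aggregated subgradient, a line search then classifies the step as serious or null, and correction pairs are stored only for serious steps. The objective f, the subgradient oracle subgrad, and all parameter values are illustrative assumptions, and the subgradient aggregation used here is a crude stand-in for the convex-combination aggregation of genuine bundle methods.

import numpy as np

def lmbm_sketch(f, subgrad, x0, max_iter=200, m=7, eps_ls=1e-4, tol=1e-6):
    """Toy limited-memory bundle-style loop; illustrative only, not the LMBM of the paper.

    f       -- objective function (locally Lipschitz, possibly nonsmooth and nonconvex)
    subgrad -- oracle returning one arbitrary subgradient of f at a point
    """
    x = np.asarray(x0, dtype=float)
    g = subgrad(x)           # subgradient at the current iterate
    g_agg = g.copy()         # crude stand-in for the aggregated subgradient
    S, Y = [], []            # limited-memory correction pairs (from serious steps)

    def apply_inverse_metric(q):
        """Standard L-BFGS two-loop recursion using the stored correction pairs."""
        q = q.copy()
        alphas = []
        for s, y in zip(reversed(S), reversed(Y)):
            a = np.dot(s, q) / np.dot(y, s)
            alphas.append(a)
            q -= a * y
        gamma = np.dot(S[-1], Y[-1]) / np.dot(Y[-1], Y[-1]) if S else 1.0
        r = gamma * q
        for (s, y), a in zip(zip(S, Y), reversed(alphas)):
            b = np.dot(y, r) / np.dot(y, s)
            r += (a - b) * s
        return r

    for _ in range(max_iter):
        if np.linalg.norm(g_agg) < tol:
            break
        d = -apply_inverse_metric(g_agg)      # search direction
        fx, t, serious = f(x), 1.0, False
        # Backtracking line search: sufficient decrease gives a serious step;
        # otherwise the iterate stays put and only a null step is taken.
        while t > 1e-12:
            if f(x + t * d) <= fx + eps_ls * t * np.dot(g_agg, d):
                serious = True
                break
            t *= 0.5
        x_trial = x + t * d
        g_trial = subgrad(x_trial)
        if serious:
            s, y = x_trial - x, g_trial - g
            if np.dot(s, y) > 1e-12:          # keep only curvature-positive pairs
                S.append(s); Y.append(y)
                if len(S) > m:                # limited memory: drop the oldest pair
                    S.pop(0); Y.pop(0)
            x, g, g_agg = x_trial, g_trial, g_trial
        else:
            # Null step: blend the new subgradient into the aggregate.
            g_agg = 0.5 * (g_agg + g_trial)
    return x

The limited memory storage (only the last m correction pairs) is what keeps the cost per iteration linear in the number of variables, which is the point of the approach for large-scale problems; the serious/null step distinction is what lets the method make progress even where the objective is not differentiable.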
Similar resources
Limited Memory Bundle Method for Large Bound Constrained Nonsmooth Optimization
Practical optimization problems often involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such large problems are restricted to certain meaningful intervals. In the report [Haarala, Mäkelä, 2006] we have described an efficient adaptive limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization...
Limited memory bundle method for large bound constrained nonsmooth optimization: convergence analysis
Practical optimization problems often involve nonsmooth functions of hundreds or thousands of variables. As a rule, the variables in such large problems are restricted to certain meaningful intervals. In the paper [Karmitsa, Mäkelä, 2009] we described an efficient limited memory bundle method for large-scale nonsmooth, possibly nonconvex, bound constrained optimization. Although this method wor...
LMBM — FORTRAN Subroutines for Large-Scale Nonsmooth Minimization: User’s Manual
LMBM is a limited memory bundle method for large-scale nonsmooth, possibly nonconvex, optimization. It is intended for problems that are difficult or even impossible to solve with classical gradient-based optimization methods due to nonsmoothness and for problems that can not be solved efficiently with standard nonsmooth optimization methods (like proximal bundle and bundle trust methods) due t...
Limited memory interior point bundle method for large inequality constrained nonsmooth minimization
Many practical optimization problems involve nonsmooth (that is, not necessarily differentiable) functions of hundreds or thousands of variables with various constraints. In this paper, we describe a new efficient adaptive limited memory interior point bundle method for large, possibly nonconvex, nonsmooth inequality constrained optimization. The method is a hybrid of the nonsmooth variable met...
New Quasi-Newton Optimization Methods for Machine Learning
This thesis develops new quasi-Newton optimization methods that exploit the well-structured functional form of objective functions often encountered in machine learning, while still maintaining the solid foundation of the standard BFGS quasi-Newton method. In particular, our algorithms are tailored for two categories of machine learning problems: (1) regularized risk minimization problems with c...
Journal:
Math. Program.
Volume 109, Issue -
Pages -
Publication date: 2007